21.
With the growth of marine resource exploration and marine pollutant monitoring, the monitoring and collection of hydrological data have become an important research direction, and underwater wireless sensor networks play a pivotal role in hydrological data acquisition. This paper studies the data-collection problem for sensor nodes in a two-dimensional underwater wireless sensor monitoring network model. The proposed method first applies self-organizing mapping (SOM) to optimize the tour over the sensor nodes; it then combines the optimized tour with the K-means algorithm to find aggregation points inside the tour, and uses these aggregation points together with the sensor nodes to obtain data-collection points within each sensor's communication radius. Finally, SOM is applied again to obtain the optimal path for an autonomous underwater vehicle (AUV) to visit all data-collection points. Experiments show that, with 52 sensor nodes deployed in an underwater area of 1 200 m × 1 750 m, the path through the data-collection points is 6.7% shorter than the path planned over the sensor nodes when the same visiting order is used, and re-planning a self-organizing path over the data-collection points yields a path 12.2% better than the best solution over the sensor nodes. The results are broadly similar when the number of sensor nodes is increased, so the method can improve the efficiency of AUV data collection.
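To make the tour-construction step concrete, the following is a minimal sketch (not the authors' implementation) of a 1-D SOM applied to tour optimization over sensor-node coordinates; the neuron count, learning rate, and decay schedule are illustrative assumptions.

```python
# A minimal sketch of SOM-based tour optimization: neurons on a ring are
# pulled toward sensor-node coordinates, and the final ring ordering gives
# the visiting order. Coordinates and hyperparameters are assumptions.
import numpy as np

def som_tour(nodes, n_neurons=None, iters=20000, lr=0.8):
    """Order `nodes` (N x 2 array) into a short closed tour via a 1-D SOM."""
    rng = np.random.default_rng(0)
    n_neurons = n_neurons or 8 * len(nodes)
    ring = rng.uniform(nodes.min(0), nodes.max(0), (n_neurons, 2))
    radius = n_neurons / 10.0          # neighbourhood width
    for _ in range(iters):
        city = nodes[rng.integers(len(nodes))]
        winner = np.argmin(np.linalg.norm(ring - city, axis=1))
        # circular distance of every neuron to the winner on the ring
        d = np.abs(np.arange(n_neurons) - winner)
        d = np.minimum(d, n_neurons - d)
        h = np.exp(-(d ** 2) / (2 * max(radius, 1.0) ** 2))
        ring += lr * h[:, None] * (city - ring)
        lr *= 0.99997      # decay learning rate
        radius *= 0.9997   # shrink neighbourhood
    # read the tour off the ring: each node mapped to its winning neuron
    return np.argsort([np.argmin(np.linalg.norm(ring - c, axis=1)) for c in nodes])

# 52 random sensor nodes in the 1 200 m x 1 750 m area from the abstract
nodes = np.random.default_rng(1).uniform([0, 0], [1200, 1750], (52, 2))
print(som_tour(nodes))
```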
22.
5G-NR and B5G systems both need to support the transmission of small data packets (blocks) efficiently, and to switch quickly and flexibly between different radio transmission techniques (e.g., wideband or ultra-low latency). This paper systematically reviews and compares several radio transmission techniques related to small data packets (blocks), such as two-step RACH, preconfigured grants, and contention on common resources; analyzes their impact on the various 3GPP systems and RAN protocols as well as their relative complexity; and discusses their future development and application trends.
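As a rough illustration of why two-step RACH helps small-data transmission, the sketch below counts message exchanges and estimates access latency under an assumed one-way air-interface delay; the message names follow 3GPP usage, but the delay figure is purely an illustrative assumption.

```python
# Illustrative comparison (assumed numbers, not 3GPP timing specs):
# four-step RACH needs two round trips before data delivery, while
# two-step RACH folds the preamble and payload into a single exchange.
ONE_WAY_MS = 4.0  # assumed one-way UE<->gNB delay incl. processing

four_step = ["Msg1: preamble", "Msg2: RAR", "Msg3: RRC request", "Msg4: contention resolution"]
two_step  = ["MsgA: preamble + payload", "MsgB: response"]

for name, msgs in [("4-step RACH", four_step), ("2-step RACH", two_step)]:
    latency = len(msgs) * ONE_WAY_MS
    print(f"{name}: {len(msgs)} messages, ~{latency:.0f} ms before data delivery")
```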
23.
24.
It is important to perform neutron transport simulations with accurate nuclear data in the neutronics design of a fusion reactor. However, absolute values of large-angle scattering cross sections vary among nuclear data libraries even for a well-examined nuclide such as iron. Benchmark experiments focusing on large-angle scattering cross sections were therefore performed to verify the nuclear data libraries. The series of benchmark experiments was carried out at a DT neutron source facility, OKTAVIAN of Osaka University, Japan, using a unique experimental system established by the authors' group that can extract only the contribution of large-angle scattering reactions. The system consists of two shadow bars, a target plate (iron), and a neutron detector (niobium). Two types of shadow bars were used and four irradiations were conducted per experiment, so that the contribution of room-return neutrons was effectively removed and only large-angle scattering neutrons were extracted from the four measured Nb reaction rates. The experimental results were compared with calculations for five nuclear data libraries: JENDL-4.0, JEFF-3.3, FENDL-3.1, ENDF/B-VII, and the recently released ENDF/B-VIII. The comparison showed that ENDF/B-VIII gave the best result, while ENDF/B-VII overestimated and the other libraries largely underestimated the measurements at 14 MeV.
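The four-irradiation bookkeeping in this setup can be expressed as a simple difference-of-differences. The sketch below shows the idea under the assumption that the two shadow-bar configurations are each measured with and without the iron target plate; the exact combination used by the authors may differ, and the numbers are invented.

```python
# Hypothetical extraction of the large-angle scattering component from
# four Nb reaction rates (arbitrary units, invented values). Assumed
# scheme: (target in - target out) removes room-return background, and
# subtracting the second shadow-bar configuration removes the remaining
# direct-beam component.
R_target_bar1    = 5.21  # target in place, shadow bar type 1
R_no_target_bar1 = 1.34  # target removed,  shadow bar type 1
R_target_bar2    = 3.87  # target in place, shadow bar type 2
R_no_target_bar2 = 1.29  # target removed,  shadow bar type 2

large_angle = (R_target_bar1 - R_no_target_bar1) - (R_target_bar2 - R_no_target_bar2)
print(f"large-angle scattering component: {large_angle:.2f} (a.u.)")
```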
25.
Sorting-based reversible data hiding (RDH) methods such as pixel-value-ordering (PVO) can predict pixel values accurately and achieve extremely low distortion on the embedded image. However, the excellent performance of these methods was not well explained in previous works, and there are unexploited common points among them. In this paper, we propose a general multi-predictor (GMP) framework to summarize PVO-based RDH methods and explain their high prediction accuracy. Moreover, using the proposed GMP framework, a more efficient sorting-based RDH method is given as an example to show the generality and applicability of the framework. Compared with other PVO-based methods, the proposed example method achieves a significant improvement in embedding performance. It is hoped that more efficient sorting-based RDH algorithms can be designed according to the proposed framework by designing better predictors and better methods of combining them.
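For readers unfamiliar with PVO, the following minimal sketch shows the classic textbook PVO embedding step (not the paper's GMP method): sorting a pixel block lets the largest value be predicted from the second largest, which is what keeps prediction errors so small.

```python
# Minimal pixel-value-ordering (PVO) embedding for one block: sort the
# block, predict the maximum by the second-largest value, and embed one
# bit into the prediction error e = max - second_max. This is the classic
# PVO idea, not the GMP framework of the paper.
def pvo_embed_max(block, bit):
    """block: list of pixel values; bit: 0 or 1. Returns modified block."""
    order = sorted(range(len(block)), key=lambda i: block[i])
    i_max, i_2nd = order[-1], order[-2]
    e = block[i_max] - block[i_2nd]
    out = block[:]
    if e == 1:    # embeddable error: expand by the payload bit
        out[i_max] += bit
    elif e > 1:   # shift to keep decoding unambiguous
        out[i_max] += 1
    # e == 0 is left unchanged (overflow/location-map handling omitted)
    return out

print(pvo_embed_max([52, 55, 54, 53], 1))  # max 55 predicted by 54 -> carries a bit
```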
26.
To efficiently link the continuum mechanics of rocks with the structural statistics of rock masses, a theoretical and methodological system called the statistical mechanics of rock masses (SMRM) has been developed over the past three decades. In SMRM, equivalent continuum models of the stress–strain relationship, strength, and failure probability of jointed rock masses were established based on geometric probability models characterising the rock mass structure, following statistical physics, continuum mechanics, fracture mechanics, and the weakest-link hypothesis. A general constitutive model and complete stress–strain models under compressive and shear conditions were also developed as derivatives of the SMRM theory. An SMRM calculation system was then developed to provide fast and precise estimates of rock mass parameters such as full-direction rock quality designation (RQD), elastic modulus, Coulomb compressive strength, rock mass quality rating, Poisson's ratio, and shear strength. The constitutive equations involved in SMRM were integrated into a FLAC3D-based numerical module for application to engineering rock masses; the module is also capable of analysing the complete deformation of rock masses and the active reinforcement of engineering rock masses. Examples of engineering applications of SMRM are presented, including a rock mass at the QBT hydropower station in northwestern China, a dam slope of the Zongo II hydropower station in D.R. Congo, an open-pit mine in Dexing, China, an underground powerhouse of the Jinping I hydropower station in southwestern China, and a typical circular tunnel on the Lanzhou–Chongqing railway, China. These applications verified the reliability of SMRM and demonstrated its applicability to a broad range of engineering issues associated with jointed rock masses.
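As one concrete example of the parameters such a system estimates, rock quality designation (RQD) has a simple classical definition (Deere's core-logging formula; a sketch of that definition is given below, not SMRM's full-direction RQD estimator).

```python
# Classic RQD (Deere): percentage of a core run made up of intact pieces
# at least 0.1 m long. Textbook definition only, with invented core data.
def rqd(piece_lengths_m, threshold_m=0.1):
    total = sum(piece_lengths_m)
    sound = sum(p for p in piece_lengths_m if p >= threshold_m)
    return 100.0 * sound / total

pieces = [0.25, 0.08, 0.17, 0.05, 0.32, 0.09, 0.41]  # example core run (m)
print(f"RQD = {rqd(pieces):.1f}%")  # -> about 84%
```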
27.
Any knowledge extraction relies (possibly implicitly) on a hypothesis about the modelled-data dependence, and the extracted knowledge ultimately serves decision-making (DM). DM always faces uncertainty, which makes probabilistic modelling adequate. The black-box modelling inspected here deals with "universal" approximators of the relevant probabilistic model. Finite mixtures with components in the exponential family are often exploited; their attractiveness stems from their flexibility, the cluster interpretability of components, and the existence of algorithms for processing high-dimensional data streams. They are even used in dynamic cases with mutually dependent data records, where regression and auto-regression mixture components serve for dependence modelling. These dynamic models, however, mostly assume data-independent component weights, that is, memoryless transitions between dynamic mixture components. Such mixtures are not universal approximators of dynamic probabilistic models. Formally, this follows from the fact that the set of finite probabilistic mixtures is not closed with respect to conditioning, which is the key estimation and predictive operation. The paper overcomes this drawback by using ratios of finite mixtures as universally approximating dynamic parametric models. The paper motivates them, elaborates their approximate Bayesian recursive estimation, and reveals their application potential.
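The closure argument can be made explicit: conditioning a finite joint mixture produces a ratio of mixtures rather than a mixture with fixed weights, which is exactly the model class the paper adopts. A sketch of this step (the notation is illustrative, not the paper's):

```latex
% Joint model of the new record y and the regressor x as a finite mixture:
\[
  f(y, x) \;=\; \sum_{c=1}^{C} \alpha_c\, f_c(y, x),
  \qquad \alpha_c \ge 0,\; \sum_{c} \alpha_c = 1 .
\]
% The predictive (conditional) density is then a ratio of finite mixtures:
\[
  f(y \mid x)
  \;=\; \frac{\sum_{c} \alpha_c\, f_c(y, x)}{\sum_{c} \alpha_c\, f_c(x)}
  \;=\; \sum_{c} w_c(x)\, f_c(y \mid x),
  \qquad
  w_c(x) \;=\; \frac{\alpha_c\, f_c(x)}{\sum_{j} \alpha_j\, f_j(x)} .
\]
% The weights w_c(x) depend on the data x, so the conditional is generally
% not a fixed-weight finite mixture: the mixture class is not closed under
% conditioning, whereas the class of mixture ratios is.
```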
28.
An effective practical approach is presented that not only allows a significant reduction in the scope of practical experiments when studying suspension separation processes in hydrocyclones, but also makes it possible to assess the intensity of the random components of the processes and to relate those components to the hydrodynamics of flows in a hydrocyclone. Within the framework of the developed probabilistic-statistical model of suspension separation in hydrocyclones, a relationship between the deterministic and random components of the processes was found on the basis of statistical self-similarity properties. This allowed a transition from three-parameter probability density functions for suspension particles in hydrocyclones to two-parameter functions, significantly improving the efficiency of practical application of the developed model.
29.
This study proposes a data-driven operational control framework using machine learning-based predictive modeling, with the aim of decreasing the energy consumption of a natural gas sweetening process. The multi-stage framework is composed of the following steps: (a) a clustering algorithm based on the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) methodology is implemented to characterize the sampling space of all possible operating states and to determine the operational modes of the gas sweetening unit; (b) the lowest steam consumption of each operational mode is selected as a reference for operational control of the gas sweetening process; and (c) a number of high-accuracy regression models are developed using the Gradient Boosting Machines algorithm to predict the controlled parameters and output variables. The framework yields an operational control strategy that provides actionable insights into the energy performance of the unit's current operations and quantifies the energy-saving potential for gas treating plant operators; the ultimate goal is to leverage this data-driven strategy to identify achievable energy conservation opportunities in such plants. The dataset for this study consists of 29 817 records sampled over the course of 3 years from a gas train in the South Pars Gas Complex. Offline analysis demonstrates a potential energy saving of 8%, equivalent to a reduction of 5 760 000 Nm³ in natural gas consumption, achievable by mapping the steam consumption states of the unit to the best energy performances predicted by the proposed framework.
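A minimal sketch of the pipeline shape described in steps (a)–(c), using scikit-learn stand-ins (DBSCAN and GradientBoostingRegressor); the synthetic data, feature construction, and hyperparameters are assumptions, not the paper's configuration.

```python
# Sketch of the three-stage pipeline: (a) cluster operating states with
# DBSCAN, (b) take each mode's lowest steam consumption as the control
# reference, (c) fit a gradient-boosting regressor to predict steam use.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                      # stand-in process variables
steam = 3.0 + X @ [0.5, -0.2, 0.8, 0.1] + rng.normal(0.0, 0.2, 2000)

# (a) operational modes from density-based clustering
Xs = StandardScaler().fit_transform(X)
modes = DBSCAN(eps=0.9, min_samples=20).fit_predict(Xs)

# (b) best (lowest) steam consumption per mode as the control reference
for m in sorted(set(modes) - {-1}):                 # -1 marks DBSCAN noise
    print(f"mode {m}: best steam = {steam[modes == m].min():.3f}")

# (c) regression model predicting steam consumption from the process state
model = GradientBoostingRegressor(n_estimators=300, max_depth=3).fit(X, steam)
print("R^2 on training data:", round(model.score(X, steam), 3))
```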
30.
Model building and parameter estimation are traditional concepts widely used in the chemical, biological, metallurgical, and manufacturing industries. Early modeling methodologies focused on mathematically capturing the process knowledge and domain expertise of the modeler; the models thus developed are termed first-principles models (or white-box models). Over time, computational power became cheaper and massive amounts of data became available for modeling, which led to the development of cutting-edge machine learning models (black-box models) and artificial intelligence (AI) techniques. Hybrid models (gray-box models) combine first-principles and machine learning models, and their development has captured the attention of researchers because it unites the best of both modeling paradigms. Recent attention to this field stems from interest in explainable AI (XAI), a critical requirement as AI systems become more pervasive. This work identifies and categorizes the hybrid models available in the literature that integrate machine learning models with different forms of domain knowledge, and summarizes benefits such as enhanced predictive power and extrapolation capability. The goal of this article is to consolidate the published corpus on hybrid modeling and to develop a comprehensive framework for understanding the various techniques presented; this framework can further serve as a foundation for exploring rational associations between models.
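One common hybrid (gray-box) architecture in this literature is the parallel residual scheme: a first-principles model gives a baseline prediction and a machine learning model learns only the mismatch. A minimal sketch under invented physics and data (not any specific model from the survey):

```python
# Parallel hybrid (gray-box) model: white-box baseline + ML residual.
# The "physics" (a deliberately mis-specified Arrhenius-style rate law)
# and the data are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
T = rng.uniform(300.0, 400.0, 500)                  # temperature, K

def first_principles(T, k0=1e3, Ea_over_R=2500.0):
    """Simplified white-box rate model (assumed, slightly mis-specified)."""
    return k0 * np.exp(-Ea_over_R / T)

# "True" plant behaviour differs from the white-box model
y_true = first_principles(T, k0=1.2e3, Ea_over_R=2650.0) + rng.normal(0, 0.5, 500)

# Black-box part learns only the residual the physics cannot explain
residual = y_true - first_principles(T)
ml = GradientBoostingRegressor().fit(T.reshape(-1, 1), residual)

def hybrid_predict(T_new):
    T_new = np.asarray(T_new, dtype=float)
    return first_principles(T_new) + ml.predict(T_new.reshape(-1, 1))

print(hybrid_predict([320.0, 380.0]))
```

The appeal of this decomposition, as the survey notes, is that the white-box part carries the extrapolation behaviour while the black-box part only corrects local mismatch.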